Regression with reject option and application to kNN
We investigate the problem of regression where the learner is allowed to abstain from predicting. We refer to this framework as regression with reject option, as an extension of classification with reject option. In this context, we focus on the case where the rejection rate is fixed and derive the optimal rule, which relies on thresholding the conditional variance function. We provide a semi-supervised estimation procedure for the optimal rule involving two datasets: a first, labeled dataset is used to estimate both the regression function and the conditional variance function, while a second, unlabeled dataset is exploited to calibrate the desired rejection rate. The resulting predictor with reject option is shown to be almost as good as the optimal predictor with reject option, both in terms of risk and of rejection rate. We additionally apply our methodology to the kNN algorithm and establish rates of convergence for the resulting kNN predictor under mild conditions. Finally, a numerical study is performed to illustrate the benefit of using the proposed procedure.
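One natural way to formalize the rule described above (the notation here is ours, chosen to be consistent with the abstract): writing $f^*(x) = \mathbb{E}[Y \mid X = x]$ for the regression function and $\sigma^2(x) = \mathrm{Var}(Y \mid X = x)$ for the conditional variance, the optimal predictor at a fixed rejection rate $\varepsilon$ takes the form
$$
\Gamma^*(x) =
\begin{cases}
f^*(x) & \text{if } \sigma^2(x) \le F^{-1}_{\sigma^2(X)}(1-\varepsilon),\\
\texttt{reject} & \text{otherwise},
\end{cases}
$$
where $F^{-1}_{\sigma^2(X)}(1-\varepsilon)$ denotes the $(1-\varepsilon)$-quantile of $\sigma^2(X)$. In words, the predictor abstains exactly on the $\varepsilon$-fraction of points with the highest conditional variance.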
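To make the two-step semi-supervised procedure concrete, here is a minimal sketch of a plug-in implementation. It assumes kNN estimators for both the regression function and the conditional variance function (the latter via regression of squared residuals, a common plug-in strategy that may differ from the paper's exact construction), and calibrates the rejection threshold as an empirical quantile over the unlabeled sample. All function names are illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def fit_reject_knn(X_lab, y_lab, X_unlab, reject_rate, k=10):
    """Fit plug-in kNN estimates and calibrate the rejection threshold.

    The labeled data (X_lab, y_lab) is used to estimate the regression and
    conditional variance functions; the unlabeled data X_unlab calibrates
    the threshold so that roughly a `reject_rate` fraction is rejected.
    """
    # Step 1: estimate the regression function f(x) = E[Y | X = x] with kNN.
    mean_knn = KNeighborsRegressor(n_neighbors=k).fit(X_lab, y_lab)

    # Step 2: estimate the conditional variance sigma^2(x) by regressing
    # squared residuals on X (illustrative residual-based estimator).
    residuals_sq = (y_lab - mean_knn.predict(X_lab)) ** 2
    var_knn = KNeighborsRegressor(n_neighbors=k).fit(X_lab, residuals_sq)

    # Step 3: set the threshold to the empirical (1 - reject_rate)-quantile
    # of the estimated variance over the unlabeled sample.
    threshold = np.quantile(var_knn.predict(X_unlab), 1.0 - reject_rate)
    return mean_knn, var_knn, threshold

def predict_with_reject(mean_knn, var_knn, threshold, X):
    """Predict with abstention: NaN marks rejected points."""
    preds = mean_knn.predict(X)
    preds[var_knn.predict(X) > threshold] = np.nan  # abstain on high variance
    return preds
```

For instance, with `reject_rate=0.1` the fitted predictor abstains on roughly the 10% of points with the highest estimated conditional variance, mirroring the calibrated rejection rate described in the abstract.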
Response to Reviewers #1 and #2 (Q1): The scope of the negative result is unclear
Thank you for the helpful comments and suggestions. We will address the concerns raised by the reviewers. We discuss the difficulty of satisfying Corollary 5 when the problem becomes multiclass in Lines 144-158. This makes it easier to tune the hyperparameters so as to satisfy the necessary condition, as illustrated in Eq. (9) for … We checked Eq. (5) in "Learning Confidence for Out-of-Distribution Detection in Neural Networks". Regarding the problem addressed by "On Calibration of Modern Neural Networks", …